
    Generating large non-singular matrices over an arbitrary field with blocks of full rank

    This note describes a technique for generating large non-singular matrices with blocks of full rank over an arbitrary field. Our motivation for constructing such matrices arises in white-box implementations of cryptographic algorithms with S-boxes.
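The note's own construction is not given in the abstract; as a minimal illustrative stand-in, the sketch below draws random integer matrices and keeps one only when the full matrix and every square block of a regular partition have full rank. The rejection-sampling approach, function name, and parameters are assumptions, not the paper's method, and it works over the rationals rather than an arbitrary field.

```python
import numpy as np

def random_matrix_with_full_rank_blocks(n, block, seed=None, max_tries=100):
    """Rejection-sampling sketch: draw random integer matrices until the
    whole n x n matrix and every (block x block) sub-block of the regular
    partition have full rank. Illustrative only; not the note's construction."""
    assert n % block == 0
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        M = rng.integers(-5, 6, size=(n, n))
        if np.linalg.matrix_rank(M) < n:
            continue  # whole matrix must be non-singular
        blocks_ok = all(
            np.linalg.matrix_rank(M[i:i + block, j:j + block]) == block
            for i in range(0, n, block)
            for j in range(0, n, block)
        )
        if blocks_ok:
            return M
    raise RuntimeError("no suitable matrix found within max_tries")

# usage: an 8x8 non-singular matrix whose four 4x4 blocks all have rank 4
M = random_matrix_with_full_rank_blocks(8, 4, seed=0)
```

For small block counts the rejection rate is low, since a random matrix over a large domain is non-singular with high probability.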

    Photoelectrocatalytic Degradation of Humic Acids Using Codoped TiO2

    Cu/N codoped TiO2 films on Ti substrates were successfully prepared by an electrochemical method with the goal of enhancing photoelectrocatalytic activity under visible light. The morphology and composition of the Cu/N codoped films were characterized using field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), energy-dispersive X-ray spectroscopy (EDX), and UV-Vis diffuse reflectance spectroscopy (UV-Vis DRS). The photocatalytic activities of the Cu/N codoped TiO2 films were evaluated through the degradation of humic acid (HA). Under visible light alone, the Cu/N codoped TiO2 films reached a degradation efficiency of up to 41.5% after 210 minutes of treatment, showing that Cu2+ and NH4+ codoping significantly improves photocatalytic efficiency under visible light. When a +5.0 V anodic bias potential and visible light were applied simultaneously, the degradation efficiency of HA over the Cu/N codoped TiO2 films improved markedly, to 93.5% after 210 minutes of treatment.

    Can GPT Models Follow Human Summarization Guidelines? Evaluating ChatGPT and GPT-4 for Dialogue Summarization

    This study explores the capabilities of prompt-driven Large Language Models (LLMs) such as ChatGPT and GPT-4 in adhering to human guidelines for dialogue summarization. Experiments employed DialogSum (English social conversations) and DECODA (French call center interactions), testing various prompts, including prompts from the existing literature, prompts derived from human summarization guidelines, and a two-step prompting approach. Our findings indicate that GPT models often produce lengthy summaries and deviate from human summarization guidelines. However, using the human guidelines as an intermediate step shows promise, outperforming direct word-length-constraint prompts in some cases. The results also reveal that GPT models exhibit distinctive stylistic tendencies in their summaries. While BERTScores did not decrease dramatically for GPT outputs, suggesting semantic similarity to human references and to specialized pre-trained models, ROUGE scores reveal grammatical and lexical disparities between GPT-generated and human-written summaries. These findings shed light on the capabilities and limitations of GPT models in following human instructions for dialogue summarization.
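The two-step approach described above can be sketched as a small prompting pipeline: first ask the model to summarize while following the human guidelines, then ask it to condense that draft. The prompt wording and the `call_llm` callable interface below are assumptions for illustration, not the paper's actual prompts.

```python
def two_step_summarize(dialogue, guidelines, call_llm):
    """Sketch of a two-step prompting flow: step 1 summarizes the dialogue
    following human guidelines; step 2 shortens the resulting draft.
    `call_llm` is any callable mapping a prompt string to model output
    (a stand-in for a real LLM API)."""
    # Step 1: guideline-conditioned summary (intermediate step).
    step1 = (
        "Summarize the dialogue, following these guidelines:\n"
        f"{guidelines}\n\nDialogue:\n{dialogue}"
    )
    draft = call_llm(step1)
    # Step 2: condense the draft rather than constraining length directly.
    step2 = f"Shorten this summary while keeping its key content:\n{draft}"
    return call_llm(step2)

# usage with a trivial stub model (no network access needed)
stub = lambda prompt: "short summary"
result = two_step_summarize("A: hi\nB: see you later", "Be concise.", stub)
```

Swapping the stub for a real API client is the only change needed to run this against an actual model.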

    Evaluating Emotional Nuances in Dialogue Summarization

    Automatic dialogue summarization is a well-established task that aims to identify the most important content of a human conversation and condense it into a short textual summary. Despite recent progress in the field, we show that most research has focused on summarizing factual information, leaving aside the affective content, which can nevertheless convey useful information for analysing, monitoring, or supporting human interactions. In this paper, we propose and evaluate a set of measures, PEmo, to quantify how much emotion is preserved in dialogue summaries. Results show that state-of-the-art summarization models do not preserve the emotional content in their summaries well. We also show that by reducing the training set to only emotional dialogues, the emotional content is better preserved in the generated summaries, while the most salient factual information is still conserved.

    Deep Domain-Adversarial Image Generation for Domain Generalisation

    Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset with a different distribution. To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains. In this paper, we propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG). Specifically, DDAIG consists of three components: a label classifier, a domain classifier, and a domain transformation network (DoTNet). The goal of DoTNet is to map the source training data to unseen domains. This is achieved through a learning objective formulated to ensure that the generated data can be correctly classified by the label classifier while fooling the domain classifier. By augmenting the source training data with the generated unseen-domain data, we can make the label classifier more robust to unknown domain changes. Extensive experiments on four DG datasets demonstrate the effectiveness of our approach.
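The adversarial objective described above can be sketched as a single loss for the transformation network: minimize the label-classification loss on the transformed sample while maximizing the domain-classification loss (so the sample looks like it comes from no known domain). The function signature, callables, and weighting parameter below are illustrative assumptions, not the paper's exact formulation.

```python
def dotnet_objective(x, y, d, transform, label_loss, domain_loss, lam=1.0):
    """Sketch of the DoTNet training objective: the transformed sample
    should still be correctly classified (low label loss) while fooling
    the domain classifier (high domain loss, hence the minus sign).
    `transform`, `label_loss`, and `domain_loss` are arbitrary callables
    standing in for the three networks; `lam` trades off the two terms."""
    x_new = transform(x)
    return label_loss(x_new, y) - lam * domain_loss(x_new, d)
```

In a full implementation the three callables would be neural networks, with this objective driving gradient updates of the transformation network only, while the label and domain classifiers are trained on their usual supervised losses.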